On Convergence and Stability of GANs
Authors
Abstract
Generative Adversarial Networks have emerged as an effective technique for estimating data distributions. The basic setup consists of two deep networks playing against each other in a zero-sum game. However, it is not well understood whether the networks eventually reach an equilibrium and what dynamics make this possible. The current GAN training procedure, which involves simultaneous gradient descent, lacks a clear game-theoretic justification in the literature. In this paper, we introduce regret minimization as a technique to reach equilibrium in games and use it to motivate the use of simultaneous gradient descent in GANs. In addition, we present the hypothesis that mode collapse, a common occurrence in GAN training, happens due to the existence of spurious local equilibria in non-convex games. Motivated by these insights, we develop an algorithm called DRAGAN that is fast, simple to implement, and achieves competitive performance in a stable fashion across different architectures, datasets (MNIST, CIFAR-10, and CelebA), and divergence measures with almost no hyperparameter tuning.
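Because the abstract only names DRAGAN without spelling out its regularizer, the following is a minimal sketch, assuming PyTorch, of the kind of gradient penalty on perturbed real samples that DRAGAN is built around; the function name dragan_gradient_penalty, the coefficients lambda_, k, and c, and their default values are illustrative assumptions rather than the paper's reference implementation.

```python
import torch

def dragan_gradient_penalty(discriminator, real_batch, lambda_=10.0, k=1.0, c=0.5):
    """One common form of a DRAGAN-style gradient penalty: push the norm of the
    discriminator's gradient toward k on noisy perturbations of real data,
    discouraging the sharp local optima associated with mode collapse.
    lambda_, k, and c are assumed defaults, not values taken from this page."""
    # Perturb real samples within a neighbourhood scaled by the batch's std-dev.
    noise = c * real_batch.std() * torch.rand_like(real_batch)
    x_hat = (real_batch + noise).detach().requires_grad_(True)

    d_out = discriminator(x_hat)
    # Gradient of the discriminator output w.r.t. the perturbed inputs.
    grads = torch.autograd.grad(
        outputs=d_out.sum(), inputs=x_hat, create_graph=True
    )[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lambda_ * ((grad_norm - k) ** 2).mean()
```

In a training loop, a term of this kind would simply be added to the discriminator loss before each simultaneous gradient-descent update, which is the setting the regret-minimization view above is meant to justify.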
Similar resources
Analysis of Nonautonomous Adversarial Systems
Generative adversarial networks are used to generate images, but their convergence properties are still not well understood. There have been a few studies that set out to investigate the stability properties of GANs as a dynamical system, and this short note can be seen as a step in that direction. Among the methods proposed for stabilizing the training of GANs, β-GAN was the first to propose a complete anne...
High-Resolution Deep Convolutional Generative Adversarial Networks
The convergence of Generative Adversarial Networks (GANs) [7] in a high-resolution setting under a computational constraint on GPU memory capacity (from 12 GB to 24 GB) has been beset with difficulty due to the known lack of convergence-rate stability. In order to boost the convergence of DCGAN (Deep Convolutional Generative Adversarial Networks) [14] and achieve good-looking high-resolution results ...
Convergence, Consistency and Stability in Fuzzy Differential Equations
In this paper, we consider first-order fuzzy differential equations with initial value conditions. The convergence, consistency, and stability of a difference method for approximating the solution of fuzzy differential equations involving generalized H-differentiability are studied. Then the local truncation error is defined, and sufficient conditions for convergence, consistency, and stability of ...
Convergence and Stability of Modified Random SP-Iteration for A Generalized Asymptotically Quasi-Nonexpansive Mappings
The purpose of this paper is to study the convergence and the almost sure T-stability of the modified SP-type random iterative algorithm in separable Banach spaces. The Bochner integrability of random fixed points of this kind of random operator, together with the convergence and the almost sure T-stability for this kind of generalized asymptotically quasi-nonexpansive random mappings, are obtained. Our result...
The new implicit finite difference scheme for two-sided space-time fractional partial differential equation
Fractional-order partial differential equations are generalizations of classical partial differential equations. Increasingly, these models are used in applications such as fluid flow, finance, and others. In this paper we examine some practical numerical methods to solve a class of initial-boundary value fractional partial differential equations with variable coefficients on a finite domain. S...
Stability and convergence theorems of pointwise asymptotically nonexpansive random operator in Banach space
In this paper, we prove the existence of a random fixed point by using a pointwise asymptotically nonexpansive random operator, and we establish the stability results of two iterative schemes for random operators.